NCP: Neural Correspondence Prior for Effective Unsupervised Shape Matching
We present Neural Correspondence Prior (NCP), a new paradigm for computing
correspondences between 3D shapes. Our approach is fully unsupervised and can
lead to high-quality correspondences even in challenging cases such as sparse
point clouds or non-isometric meshes, where current methods fail. Our first key
observation is that, in line with neural priors observed in other domains,
recent network architectures on 3D data, even without training, tend to produce
pointwise features that induce plausible maps between rigid or non-rigid
shapes. Secondly, we show that given a noisy map as input, training a feature
extraction network with the input map as supervision tends to remove artifacts
from the input and can act as a powerful correspondence denoising mechanism,
both between individual pairs and within a collection. With these observations
in hand, we propose a two-stage unsupervised paradigm for shape matching by (i)
performing unsupervised training by adapting an existing approach to obtain an
initial set of noisy matches, and (ii) using these matches to train a network
in a supervised manner. We demonstrate that this approach significantly
improves the accuracy of the maps, especially when trained within a collection.
We show that NCP is data-efficient, fast, and achieves state-of-the-art results
on many tasks. Our code can be found online: https://github.com/pvnieo/NCP.
Comment: NeurIPS 2022, 10 pages, 9 figures
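The second observation above, that fitting a constrained model to a noisy map tends to strip its outliers, can be seen in miniature without any neural network. The sketch below is a toy construction on a circle, with a least-squares fit in a smooth Fourier basis playing the role of "training with the input map as supervision": a corrupted pointwise map is denoised by passing it through a low-rank, low-frequency model. All sizes and constants here are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
N, k = 100, 7  # points per shape, basis size

# Two "shapes": the same sampled circle, so the ground-truth map is the identity.
theta = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)

def fourier_basis(t, k):
    # Constant + first few Fourier modes: a smooth, low-frequency basis
    # standing in for Laplace-Beltrami eigenfunctions on a mesh.
    cols = [np.ones_like(t)]
    for f in range(1, (k - 1) // 2 + 1):
        cols += [np.cos(f * t), np.sin(f * t)]
    return np.stack(cols, axis=1)[:, :k]

A = fourier_basis(theta, k)  # basis on shape X
B = fourier_basis(theta, k)  # basis on shape Y

# Noisy input map X -> Y: identity with 20% of entries corrupted at random.
pi_noisy = np.arange(N)
bad = rng.choice(N, size=20, replace=False)
pi_noisy[bad] = rng.integers(0, N, size=20)

# "Supervise with the noisy map", collapsed to its linear essence:
# fit a low-rank (functional) map C by least squares ...
C, *_ = np.linalg.lstsq(A, B[pi_noisy], rcond=None)

# ... then read off a denoised pointwise map via nearest neighbours
# between the transported basis A @ C and the target basis B.
emb = A @ C
d2 = ((emb[:, None, :] - B[None, :, :]) ** 2).sum(-1)
pi_denoised = d2.argmin(axis=1)

def gross_errors(pi):
    # Matches further than 3 indices along the circle count as errors.
    d = np.abs(pi - np.arange(N))
    return int((np.minimum(d, N - d) > 3).sum())

print(gross_errors(pi_noisy), gross_errors(pi_denoised))
```

Because the basis contains only smooth functions, the 20 outlier rows are averaged away in the fit, and the recovered map is far closer to the identity than the input.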
Understanding and Improving Features Learned in Deep Functional Maps
Deep functional maps have recently emerged as a successful paradigm for
non-rigid 3D shape correspondence tasks. An essential step in this pipeline
consists in learning feature functions that are used as constraints to solve
for a functional map inside the network. However, the precise nature of the
information learned and stored in these functions is not yet well understood.
Specifically, a major question is whether these features can be used for any
other objective, apart from their purely algebraic role in solving for
functional map matrices. In this paper, we show that under some mild
conditions, the features learned within deep functional map approaches can be
used as point-wise descriptors and thus are directly comparable across
different shapes, even without the necessity of solving for a functional map at
test time. Furthermore, informed by our analysis, we propose effective
modifications to the standard deep functional map pipeline, which promote
structural properties of learned features, significantly improving the matching
results. Finally, we demonstrate that previously unsuccessful attempts at using
extrinsic architectures for deep functional map feature extraction can be
remedied via simple architectural changes, which encourage the theoretical
properties suggested by our analysis. We thus bridge the gap between intrinsic
and extrinsic surface-based learning, suggesting the necessary and sufficient
conditions for successful shape matching. Our code is available at
https://github.com/pvnieo/clover.
Comment: 16 pages, 8 figures, 8 tables, to be published in the 2023 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
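The central claim, that features learned inside a deep functional map pipeline become directly comparable point-wise descriptors, amounts to saying that a plain nearest-neighbour query in feature space suffices at test time, with no functional map solve. A minimal sketch with synthetic (entirely hypothetical) features:

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 50, 16

# Hypothetical per-point features for two shapes. Shape Y is a shuffled
# copy of shape X with a little noise, so the ground-truth correspondence
# is the shuffle itself.
F_x = rng.normal(size=(n, d))
perm = rng.permutation(n)
F_y = F_x[perm] + 0.01 * rng.normal(size=(n, d))

# If features are directly comparable across shapes, a plain
# nearest-neighbour query recovers the map.
d2 = ((F_y[:, None, :] - F_x[None, :, :]) ** 2).sum(-1)
pred = d2.argmin(axis=1)  # for each point of Y, its match on X

print((pred == perm).mean())  # fraction of correct matches
```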
Joint Symmetry Detection and Shape Matching for Non-Rigid Point Cloud
Despite the success of deep functional maps in non-rigid 3D shape matching,
there exists no learning framework that models both self-symmetry and shape
matching simultaneously. This is despite the fact that errors due to symmetry
mismatch are a major challenge in non-rigid shape matching. In this paper, we
propose a novel framework that simultaneously learns both a self-symmetry map
and a pairwise map between a pair of shapes. Our key idea is to couple the self-symmetry
map and the pairwise map through a regularization term that imposes a
joint constraint on both, leading to more accurate maps. We
validate our method on several benchmarks where it outperforms many competitive
baselines on both tasks.
Comment: Under review. arXiv admin note: substantial text overlap with arXiv:2110.0299
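The abstract does not spell out the coupling term. One natural form, assumed here purely for illustration and written in functional map notation, is a commutativity penalty ||C S_X - S_Y C||_F between the pairwise map C and the two self-symmetry maps: it vanishes exactly when C sends symmetric pairs on X to symmetric pairs on Y.

```python
import numpy as np

def coupling_penalty(C, S_x, S_y):
    """Hypothetical coupling regularizer: the pairwise functional map C
    should intertwine the self-symmetry maps S_x (on shape X) and S_y
    (on shape Y). Zero when C respects both symmetries."""
    return np.linalg.norm(C @ S_x - S_y @ C, ord="fro")

k = 4
# Toy symmetry: a sign flip of the "antisymmetric" basis functions,
# identical on both shapes; the identity pairwise map respects it.
S = np.diag([1.0, -1.0, 1.0, -1.0])
C_good = np.eye(k)
C_bad = np.ones((k, k)) / k  # mixes symmetric and antisymmetric parts

print(coupling_penalty(C_good, S, S))  # 0.0
print(coupling_penalty(C_bad, S, S))   # > 0
```

Used as a loss term alongside the two matching losses, such a penalty pushes both maps toward mutual consistency rather than optimizing each in isolation.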
Generalizable Local Feature Pre-training for Deformable Shape Analysis
Transfer learning is fundamental for addressing problems in settings with
little training data. While several transfer learning approaches have been
proposed in 3D, unfortunately, these solutions typically operate at the level of an
entire 3D object or even a scene and thus, as we show, fail to generalize to new
classes, such as deformable organic shapes. In addition, there is currently a
lack of understanding of what makes pre-trained features transferable across
significantly different 3D shape categories. In this paper, we make a step
toward addressing these challenges. First, we analyze the link between feature
locality and transferability in tasks involving deformable 3D objects, while
also comparing different backbones and losses for local feature pre-training.
We observe that with proper training, learned features can be useful in such
tasks, but, crucially, only with an appropriate choice of the receptive field
size. We then propose a differentiable method for optimizing the receptive
field within 3D transfer learning. Jointly, this leads to the first learnable
features that can successfully generalize to unseen classes of 3D shapes such
as humans and animals. Our extensive experiments show that this approach leads
to state-of-the-art results on several downstream tasks such as segmentation,
shape correspondence, and classification. Our code is available at
\url{https://github.com/pvnieo/vader}.
Comment: 16 pages, 14 figures, 7 tables, to be published in The IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
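One way to make a receptive field differentiable, shown here as a simplified stand-in for the paper's method with all names and constants invented, is to replace the hard radius cutoff with a sigmoid membership weight, so that local features become smooth functions of the radius and the radius itself can be optimized by gradient descent:

```python
import numpy as np

def soft_local_feature(points, values, center, r, tau=0.1):
    """Average of `values` over a soft ball of radius r around `center`.
    The hard indicator d < r is replaced by a sigmoid in (r - d), so the
    result is differentiable in r."""
    d = np.linalg.norm(points - center, axis=1)
    w = 1.0 / (1.0 + np.exp((d - r) / tau))  # soft membership weights
    return (w * values).sum() / w.sum()

rng = np.random.default_rng(2)
pts = rng.uniform(-1, 1, size=(200, 3))
vals = np.linalg.norm(pts, axis=1)  # distance-from-center signal: grows with radius
center = np.zeros(3)

f_small = soft_local_feature(pts, vals, center, r=0.2)
f_large = soft_local_feature(pts, vals, center, r=1.0)

# The feature changes smoothly with r: a finite-difference gradient is
# well defined, unlike with a hard radius cutoff.
eps = 1e-5
g = (soft_local_feature(pts, vals, center, 0.5 + eps)
     - soft_local_feature(pts, vals, center, 0.5 - eps)) / (2 * eps)
print(f_small, f_large, g)
```

In a learning pipeline, r would simply be another trainable parameter, which is the crux of making the receptive field size part of transfer learning rather than a hand-tuned hyperparameter.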
AtomSurf: Surface Representation for Learning on Protein Structures
Recent advancements in Cryo-EM and protein structure prediction algorithms
have made large-scale protein structures accessible, paving the way for machine
learning-based functional annotations. The field of geometric deep learning
focuses on creating methods that operate on geometric data. An essential aspect of
learning from protein structures is representing these structures as a
geometric object (be it a grid, graph, or surface) and applying a learning
method tailored to this representation. The performance of a given approach
will then depend on both the representation and its corresponding learning
method.
In this paper, we investigate representing proteins as surfaces and incorporate this representation into an established benchmark.
Our first finding is that despite promising preliminary results, the surface
representation alone does not seem competitive with 3D grids. Building on this,
we introduce a synergistic approach, combining surface representations with
graph-based methods, resulting in a general framework that incorporates both
representations in learning. We show that using this combination, we are able
to obtain state-of-the-art results across the benchmark's tasks. Our code
and data can be found online: https://github.com/Vincentx15/atom2D.
Comment: 10 pages
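As a rough illustration of combining the two representations (not AtomSurf's actual architecture; the projection scheme below is a deliberately simple assumption), one can transfer graph-node embeddings to their nearest surface vertices and concatenate them with the surface embeddings:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical protein: atom-graph nodes and surface vertices in 3D.
atom_xyz = rng.normal(size=(30, 3))    # graph node coordinates
surf_xyz = rng.normal(size=(100, 3))   # surface vertex coordinates
atom_feat = rng.normal(size=(30, 8))   # graph-branch embeddings
surf_feat = rng.normal(size=(100, 8))  # surface-branch embeddings

# Transfer each graph embedding to the nearest surface vertex of its
# atom, then concatenate with the surface embedding at that vertex.
d2 = ((surf_xyz[:, None, :] - atom_xyz[None, :, :]) ** 2).sum(-1)
nearest_atom = d2.argmin(axis=1)  # (100,) atom index per surface vertex
fused = np.concatenate([surf_feat, atom_feat[nearest_atom]], axis=1)

print(fused.shape)  # (100, 16)
```

A downstream head operating on `fused` then sees both chemical (graph) and geometric (surface) context per surface point, which is the general idea behind a synergistic two-branch framework.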
SRFeat: Learning Locally Accurate and Globally Consistent Non-Rigid Shape Correspondence
In this work, we present a novel learning-based framework that combines the
local accuracy of contrastive learning with the global consistency of geometric
approaches, for robust non-rigid matching. We first observe that while
contrastive learning can lead to powerful point-wise features, the learned
correspondences commonly lack smoothness and consistency, owing to the purely
combinatorial nature of the standard contrastive losses. To overcome this
limitation we propose to boost contrastive feature learning with two types of
smoothness regularization that inject geometric information into correspondence
learning. With this novel combination in hand, the resulting features are both
highly discriminative across individual points, and, at the same time, lead to
robust and consistent correspondences, through simple proximity queries. Our
framework is general and is applicable to local feature learning in both the 3D
and 2D domains. We demonstrate the superiority of our approach through
extensive experiments on a wide range of challenging matching benchmarks,
including 3D non-rigid shape correspondence and 2D image keypoint matching.
Comment: 3DV 2022. Code and data: https://github.com/craigleili/SRFea
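The abstract leaves the two smoothness regularizers unspecified. A standard choice for injecting geometric smoothness into feature learning, used here purely as an illustration, is the Dirichlet energy trace(F^T L F) with respect to the shape's Laplacian, which penalizes spatially erratic features:

```python
import numpy as np

def dirichlet_energy(F, L):
    """Smoothness regularizer trace(F^T L F): small when features vary
    slowly over the shape, large when they are spatially erratic."""
    return float(np.trace(F.T @ L @ F))

# Path graph on n vertices as a stand-in for a mesh; L = D - A.
n = 50
A = np.zeros((n, n))
idx = np.arange(n - 1)
A[idx, idx + 1] = A[idx + 1, idx] = 1.0
L = np.diag(A.sum(1)) - A

rng = np.random.default_rng(4)
F_smooth = np.linspace(0, 1, n)[:, None]  # slowly varying feature
F_noisy = rng.normal(size=(n, 1))         # spatially erratic feature

print(dirichlet_energy(F_smooth, L), dirichlet_energy(F_noisy, L))
```

Adding such a term to a contrastive loss trades a little point-wise discriminativeness for correspondences that vary coherently over the surface, which is exactly the local-versus-global balance the paper targets.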
Persistence-based Pooling for Shape Pose Recognition
In this paper, we propose a novel pooling approach for shape classification and recognition using the bag-of-words pipeline, based on topological persistence, a recent tool from Topological Data Analysis. Our technique extends standard max-pooling, which summarizes the distribution of a visual feature with a single number, thereby losing any notion of spatiality. Instead, we propose to use topological persistence, and the derived persistence diagrams, to provide significantly more informative and spatially sensitive characterizations of the feature functions, which can lead to better recognition performance. Unfortunately, despite their conceptual appeal, persistence diagrams are difficult to handle, since they are not naturally represented as vectors in Euclidean space, and even the standard metric, the bottleneck distance, is not easy to compute. Furthermore, classical distances between diagrams, such as the bottleneck and Wasserstein distances, do not allow one to build positive-definite kernels that can be used for learning. To address this issue, we provide a novel way to transform persistence diagrams into vectors, in which comparisons are trivial. Finally, we demonstrate the performance of our construction on the Non-Rigid 3D Human Models SHREC 2014 dataset, where we show that topological pooling can provide significant improvements over standard pooling methods for shape pose recognition within the bag-of-words pipeline.
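The pooling idea can be sketched concretely. Below is a self-contained toy version, an illustration of 0-dimensional sublevel-set persistence on a 1D signal rather than the paper's exact construction: the pooled descriptor keeps the k most prominent modes of a feature function, whereas max-pooling would keep only its maximum.

```python
import numpy as np

def sublevel_persistence_0d(f):
    """0-dimensional persistence pairs (birth, death) of the sublevel
    sets of a function sampled along a path; the global minimum is
    paired with the global maximum. Union-find based."""
    f = np.asarray(f, dtype=float)
    order = np.argsort(f)
    parent, birth, pairs = {}, {}, []

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    for v in order:
        v = int(v)
        parent[v], birth[v] = v, f[v]
        for u in (v - 1, v + 1):
            if u not in parent:
                continue  # neighbour not yet in the sublevel set
            ru, rv = find(u), find(v)
            if ru == rv:
                continue
            # Elder rule: the component with the larger (younger) birth dies.
            young, old = (ru, rv) if birth[ru] > birth[rv] else (rv, ru)
            if birth[young] < f[v]:  # skip zero-persistence pairs
                pairs.append((birth[young], f[v]))
            parent[young] = old
    pairs.append((f[order[0]], f[order[-1]]))  # essential class
    return pairs

def persistence_pooling(f, k=3):
    """Pool a feature function into its k largest persistences: a
    spatially sensitive extension of max-pooling."""
    pers = sorted((float(d - b) for b, d in sublevel_persistence_0d(f)),
                  reverse=True)
    return (pers + [0.0] * k)[:k]

f = [0, 2, 1, 3, 0.5, 2.5]  # two prominent minima and one shallow one
print(sublevel_persistence_0d(f))
print(persistence_pooling(f))  # [3.0, 2.5, 1.0]
```

Max-pooling would report only `3` for this signal; the persistence descriptor additionally records that two other prominent modes exist and how prominent they are, which is the extra spatial information the paper exploits.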